Simulating Distributed Systems
The simulation framework developed within the "Models of Networked Analysis at Regional Centers" (MONARC) project as a design and optimization tool for large-scale distributed systems is presented. The goals are to provide a realistic simulation of distributed computing systems, customized for specific physics data-processing tasks, and to offer a flexible and dynamic environment for evaluating the performance of a range of possible distributed computing architectures. A detailed simulation of a large system, the CMS High Level Trigger (HLT) production farm, is also presented.
Object Database Scalability for Scientific Workloads
We describe the PetaByte-scale computing challenges posed by the next generation of particle physics experiments, due to start operation in 2005. The computing models adopted by the experiments call for systems capable of handling sustained data acquisition rates of at least 100 MBytes/second into an Object Database, which will have to handle several PetaBytes of accumulated data per year. The systems will be used to schedule CPU-intensive reconstruction and analysis tasks on the highly complex physics Object data, which must then be served to clients located at universities and laboratories worldwide. We report on measurements with a prototype system that makes use of a 256-CPU HP Exemplar X-Class machine running the Objectivity/DB database. Our results show excellent scalability for up to 240 simultaneous database clients, and aggregate I/O rates exceeding 150 MBytes/second, indicating the viability of the computing models.
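The kind of client-scaling measurement described above can be sketched as a small harness: spawn N concurrent "database clients", have each consume a fixed volume of data, and report the aggregate I/O rate as N grows. The in-memory byte store below is only a stand-in for the Objectivity/DB federation used in the actual tests; all names and sizes are illustrative.

```python
# Toy scalability harness: many concurrent "database clients" each read a
# block of data, and we report the aggregate rate. The in-memory byte block
# is a stand-in for object-database reads (hypothetical setup).
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = b"\x00" * (1 << 20)  # 1 MiB of dummy "event data" per read

def client(n_reads: int) -> int:
    """One simulated database client; returns the bytes it consumed."""
    total = 0
    for _ in range(n_reads):
        total += len(BLOCK)  # stand-in for one object-database read
    return total

def aggregate_rate(n_clients: int, n_reads: int = 50) -> float:
    """Run n_clients concurrently and return aggregate MB/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        totals = list(pool.map(client, [n_reads] * n_clients))
    elapsed = time.perf_counter() - start
    return sum(totals) / (1 << 20) / elapsed

if __name__ == "__main__":
    for n in (1, 8, 32):
        print(f"{n:3d} clients: {aggregate_rate(n):10.1f} MB/s aggregate")
```

In a real test the read loop would hit the database server, and the interesting question is how far the aggregate curve stays linear as clients are added.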
Search for Randall-Sundrum excitations of gravitons decaying into two photons for CMS at LHC
The discovery potential of the CMS detector for resonant production of the massive Kaluza-Klein excitations expected in the Randall-Sundrum model is studied. Full simulation and reconstruction are used to study the diphoton decay of Randall-Sundrum gravitons. For an integrated luminosity of 30 fb^-1, the diphoton decay of the Randall-Sundrum graviton can be discovered at the 5 sigma level for masses up to 1.61 TeV in the case of weak coupling between graviton excitations and Standard Model particles (c=0.01). Heavier resonances can be detected for a larger coupling constant (c=0.1), with a mass reach of 3.95 TeV.
The Clarens web services architecture
Clarens is a uniquely flexible web services infrastructure providing a
unified access protocol to a diverse set of functions useful to the HEP
community. It uses the standard HTTP protocol combined with application layer,
certificate based authentication to provide single sign-on to individuals,
organizations and hosts, with fine-grained access control to services, files
and virtual organization (VO) management. This contribution describes the
server functionality, while client applications are described in a subsequent
talk.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 6 pages, LaTeX, 4 figures, PSN MONT00
Distributed Heterogeneous Relational Data Warehouse In A Grid Environment
This paper examines how a "Distributed Heterogeneous Relational Data
Warehouse" can be integrated in a Grid environment that will provide physicists
with efficient access to large and small object collections drawn from
databases at multiple sites. This paper investigates the requirements of
Grid-enabling such a warehouse, and explores how these requirements may be met
by extensions to existing Grid middleware. We present initial results obtained
with a working prototype warehouse of this kind using both SQLServer and
Oracle9i, where a Grid-enabled web-services interface makes it easier for
web-applications to access the distributed contents of the databases securely.
Based on the success of the prototype, we propose a framework for using heterogeneous relational data warehouses through the web-service interface to create a single "Virtual Database System" for users. The ability to transparently access data in this way, as shown in the prototype, is likely to be a very powerful facility for HENP and other Grid users wishing to collate and analyze information distributed over the Grid.
Comment: 4 pages, 6 figures
Clarens Client and Server Applications
Several applications have been implemented with access via the Clarens web
service infrastructure, including virtual organization management, JetMET
physics data analysis using relational databases, and Storage Resource Broker
(SRB) access. This functionality is accessible transparently from Python
scripts, the Root analysis framework and from Java applications and browser
applets.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 4 pages, LaTeX, no figures, PSN TUCT00
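The "transparent access from Python scripts" pattern — calling server-hosted functions over HTTP via remote procedure calls — can be mimicked with Python's standard-library XML-RPC modules. The echo service and localhost port below are illustrative only and carry none of Clarens' certificate-based authentication or access control.

```python
# Sketch of RPC-over-HTTP access in the style described above: a tiny local
# XML-RPC server plus a client that calls it. The "echo" service is a
# hypothetical stand-in for real analysis services.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Bind to an ephemeral port on localhost and register one function.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda text: "echo: " + text, "echo")
port = server.server_address[1]

# Serve requests in the background so the client below can call in.
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy(f"http://127.0.0.1:{port}")
reply = client.echo("hello")
print(reply)  # → echo: hello

server.shutdown()
```

From the script's point of view the remote function looks like a local call, which is what makes the same services reachable from ROOT, Java, and browser clients as well.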
A Quantum Monte Carlo Method at Fixed Energy
In this paper we explore new ways to study the zero temperature limit of
quantum statistical mechanics using Quantum Monte Carlo simulations. We develop
a Quantum Monte Carlo method in which one fixes the ground state energy as a
parameter. The Hamiltonians we consider are of the form $H = H_0 + \lambda V$ with ground state energy E. For fixed $H_0$ and V, one can view E as a function of $\lambda$, whereas we view $\lambda$ as a function of E. We fix E and define a path integral Quantum Monte Carlo method in which a path makes no reference to the times (discrete or continuous) at which transitions occur between states. For fixed E we can determine $\lambda(E)$ and other ground state properties of H.
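The inversion of viewpoint — treating the coupling $\lambda$ as a function of the ground state energy E rather than the other way round — can be illustrated on a toy example (not from the paper): for a 2x2 Hamiltonian with $H_0 = \mathrm{diag}(0, 2)$ and V the off-diagonal coupling, the ground energy is $E(\lambda) = 1 - \sqrt{1 + \lambda^2}$, which we invert numerically.

```python
# Toy illustration of lambda as a function of E. For H = [[0, lam], [lam, 2]]
# the ground energy has the closed form E(lam) = 1 - sqrt(1 + lam^2), which
# is strictly decreasing for lam >= 0, so E can be inverted by bisection.
import math

def ground_energy(lam: float) -> float:
    """Ground state energy of [[0, lam], [lam, 2]] (2x2 closed form)."""
    return 1.0 - math.sqrt(1.0 + lam * lam)

def lambda_of_E(e_target: float, lo=0.0, hi=100.0, tol=1e-12) -> float:
    """Bisection for the lam >= 0 with ground_energy(lam) = e_target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ground_energy(mid) > e_target:
            lo = mid  # energy still too high: need stronger coupling
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = lambda_of_E(-0.5)
print(round(lam, 6))                 # → 1.118034 (i.e. sqrt(1.25))
print(round(ground_energy(lam), 6))  # → -0.5
```

The paper's method does this implicitly inside a path-integral Monte Carlo, where E is the fixed input and $\lambda(E)$ is one of the estimated outputs.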
The MONARC toolset for simulating large network-distributed processing systems
The next generation of High Energy Physics experiments have envisaged the use of network-distributed Petabyte-scale data handling and computing systems of unprecedented complexity. The general concept is that of a "Data Grid Hierarchy" in which the central facility at the European Laboratory for Particle Physics (CERN) in Geneva will interact and coherently manage tasks shared by and distributed amongst national "Tier1 (National) Regional Centres" situated in the US, Europe, and Asia. CERN and the Tier1 Centers will further communicate and task-share with the Tier2 Regional Centers, Tier3 centers serving individual universities or research groups, and thousands of "Tier4" desktops and small servers.
The design and optimization of systems with this level of complexity requires a realistic description and modeling of the data access patterns, the data flow across the local and wide area networks, and the scheduling and workload presented by hundreds of jobs running concurrently on large scale distributed systems exchanging very large amounts of data.
The simulation toolset developed within the "Models Of Networked Analysis at Regional Centers" (MONARC) project provides a code- and execution-time-efficient design and optimisation framework for large scale distributed systems. A process-oriented approach to discrete event simulation has been adopted because it is well suited to describing various activities running concurrently, as well as the stochastic arrival patterns typical of this class of simulations. Threaded objects or "Active Objects" provide a natural way to map the specific behaviour of distributed data processing (and the required flows of data across the networks) into the simulation program.
This simulation program is based on Java2(™) technology because of the support for the necessary methods and techniques needed to develop an efficient and flexible distributed process oriented simulation. This includes a convenient set of interactive graphical presentation and analysis tools, which are essential for the development and effective use of the simulation system.
The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modelling of complex data access patterns by multiple concurrent users in large scale computing systems in a wide range of possible architectures. A comparison between queuing theory and realistic client-server measurements is also presented.
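The kind of queueing-theory cross-check mentioned above can be sketched in miniature: simulate a single-server FIFO queue with Poisson arrivals and exponential service (M/M/1) and compare the measured mean time-in-system against the textbook prediction $1/(\mu - \lambda)$. The rates below are illustrative, not MONARC parameters, and the recursion is a deliberately minimal stand-in for a full process-oriented simulation.

```python
# Minimal M/M/1 sanity check via the Lindley recursion: the waiting time of
# job n+1 is max(0, W_n + S_n - A_{n+1}), where S is service time and A is
# the interarrival gap. We compare the simulated mean sojourn (wait +
# service) time against the analytic M/M/1 result 1/(mu - lam).
import random

def simulate_mm1(lam: float, mu: float, n_jobs: int, seed: int = 1) -> float:
    """Return the mean sojourn time over n_jobs jobs of an M/M/1 queue."""
    rng = random.Random(seed)
    wait = 0.0    # waiting time of the current job (Lindley recursion)
    total = 0.0
    for _ in range(n_jobs):
        service = rng.expovariate(mu)
        total += wait + service
        interarrival = rng.expovariate(lam)
        # The next job waits for whatever backlog remains, if any.
        wait = max(0.0, wait + service - interarrival)
    return total / n_jobs

lam, mu = 0.5, 1.0
measured = simulate_mm1(lam, mu, n_jobs=200_000)
theory = 1.0 / (mu - lam)
print(f"simulated mean sojourn: {measured:.3f}  (theory: {theory:.3f})")
```

Agreement with the closed-form result validates the simulation engine on a case where theory is exact, before trusting it on the realistic client-server workloads where no closed form exists.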